Reality Check

 

Great Expectations and the “Reality” of Simulation

 

Joseph P. McFadden Sr.

Engineering Fellow, Mechanical Engineering Analysis & Services — Zebra Technologies

Adjunct Professor, Fairfield University  |  McFaddenCAE.com

 

AI Collaborators: Claude (Anthropic), ChatGPT (OpenAI), Grok (xAI)

 

Charles Dickens gave us a young man named Pip who built his entire life around a set of expectations that turned out to be something other than what he imagined. The money, the status, the certainty of a particular future — all of it rested on assumptions that felt completely solid until, one by one, they did not.

I have been thinking about that novel lately in the context of structural simulation. Not because simulation is built on illusions — it is not. It is built on rigorous physics, decades of validated methodology, and the genuine expertise of the people who practice it. But because the gap between what simulation promises and what it can actually deliver is the subject of one of the most important conversations we are not having loudly enough in engineering, product development, and the customer relationships that depend on both.

After forty-four years in materials engineering — simulation, failure analysis, fracture mechanics, expert witness work, and a few decades of teaching it all — I have come to believe that the most valuable thing an experienced analyst brings to the work is not technical skill alone. It is the honesty to explain what the numbers mean, and equally important, what they do not mean.

This is that conversation.

The most dangerous result in simulation is not a wrong prediction. It is a right answer to the wrong question, delivered with more confidence than the physics supports.

 

The Statistician Who Said It Best

George Box, one of the great statisticians of the twentieth century, left us a sentence that every engineer should have somewhere visible: “All models are wrong, but some are useful.”

It is not cynicism. It is the most precise possible description of what a simulation is. A structural analysis is a mathematical representation of a physical object. The geometry comes from a CAD model representing design intent, not the manufactured part. The material properties come from test data representing the average behavior of a population of specimens — not the specific batch of resin or metal that will be in production. The loads come from a specification representing an envelope of expected conditions, not a precise prediction of every event the product will experience.

Every one of those approximations introduces uncertainty. The competent analyst manages that uncertainty systematically, designs with appropriate conservatism, and communicates honestly about what the results can and cannot support. The result is not certainty. It is a physics-informed estimate that, built with skill and validated against physical evidence, supports better decisions than any alternative available at the same stage of development.
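Here is a minimal sketch of what that uncertainty management looks like in its simplest form: a stress-strength interference calculation, with every number invented for illustration. It is not the output of any particular tool, just the arithmetic of two overlapping distributions.

```python
# Illustrative sketch only: stress-strength interference under input uncertainty.
# All distribution parameters are hypothetical, chosen to make the point visible.
import random

random.seed(42)

N = 100_000
failures = 0
for _ in range(N):
    # Material strength: scattered around a nominal datasheet value (MPa).
    strength = random.gauss(60.0, 4.0)
    # Applied stress: varies with load case and geometric tolerance (MPa).
    stress = random.gauss(45.0, 5.0)
    if stress >= strength:
        failures += 1

# The deterministic view: 60 MPa allowable against 45 MPa predicted looks comfortable.
print(f"Deterministic margin: {60.0 / 45.0:.2f}")
# The probabilistic view: the tails of the two distributions still overlap.
print(f"Simulated failure probability: {failures / N:.2%}")  # roughly one percent
```

A margin of 1.33 says nothing about the cases where a weak unit meets a severe load. That overlap, roughly one percent in this invented example, is exactly the uncertainty the competent analyst designs against.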

That is a profound capability. It is also a bounded one. And the boundary matters.

 

Glass: When a ‘Pass’ Is Not What It Sounds Like

Let me give you the most instructive example I know: glass fracture.

Glass does not yield before it breaks. Unlike most metals and many plastics, glass has no ability to plastically deform and redistribute stress. When the stress at any point in the material reaches a critical level, fracture is essentially instantaneous. The crack initiates at a surface flaw — a microscopic scratch, a handling nick, a contact event — and propagates across the part before you can perceive it happening.

The key word is “flaw.” The strength of a glass component is not a single fixed number. It is a distribution, governed by the size, shape, and orientation of the surface flaws present on that particular piece of glass at that particular moment. Two pieces cut from the same sheet, made to the same specification, handled identically, can have meaningfully different strengths — because the flaw distributions on their surfaces are different. This is not a manufacturing defect. It is a fundamental physical property of glass as a material.

The mathematical framework that describes this behavior is called Weibull statistics. And the engineering threshold most commonly used in glass simulation is called the B10 value.

The B10 value is the stress level at which ten percent of a population of glass specimens would be expected to fracture. When a simulation reports that a glass component ‘passes,’ it typically means that the predicted stress under the modeled loading conditions is below that threshold.

Read that carefully: ten percent. A pass based on the B10 value does not mean no glass will break. It means the design is operating below the level at which one in ten pieces would be expected to fail. For a product shipped in volumes of tens or hundreds of thousands, even a fraction of a percent of field breakage represents a significant customer experience and warranty problem.
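For readers who want to see the arithmetic, here is a minimal sketch of the two-parameter Weibull strength model behind that statement. The characteristic strength and Weibull modulus below are hypothetical placeholders, not values for any real glass.

```python
# Two-parameter Weibull strength model for a brittle material (illustrative).
# sigma_0 and m below are hypothetical, not values for any real glass.
import math

sigma_0 = 700.0   # MPa, characteristic strength (63.2% failure probability)
m = 7.0           # Weibull modulus; lower m means more scatter in strength

def failure_probability(stress_mpa: float) -> float:
    """Fraction of a specimen population expected to fracture at this stress."""
    return 1.0 - math.exp(-((stress_mpa / sigma_0) ** m))

# B10: the stress at which ten percent of the population would fracture.
b10 = sigma_0 * (-math.log(0.90)) ** (1.0 / m)
print(f"B10 stress: {b10:.0f} MPa")

# Why a B10 'pass' is not zero breakage: even well below B10, a small
# tail of the population fails, and at volume that tail is real units.
design_stress = 0.6 * b10
p = failure_probability(design_stress)
print(f"At {design_stress:.0f} MPa: P(fracture) = {p:.3%}")
print(f"Expected breakage in 100,000 units: {p * 100_000:.0f}")
```

Notice what the last line says: even at a design stress forty percent below the B10 threshold, the tail of this invented distribution still predicts breakage in the hundreds across a hundred-thousand-unit population. That is the fraction-of-a-percent problem at product scale.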

There is more. The simulation models the nominal chemically strengthened glass — the designed compressive surface layer that resists crack initiation. What the simulation cannot fully capture is the degradation of that protective layer over time, the variability in its depth and magnitude from unit to unit, or the effect of a sharp corner impact on a rough surface that breaches the compressive zone in a way no simulation predicted.

The practical implication is not that glass analysis is useless. It is indispensable. It guides the design toward configurations where the stress is well below the B10 threshold, providing margin against the statistical tail of the strength distribution and against the service conditions that fall outside the nominal test envelope. But the product team and the program manager who understand the probabilistic nature of glass performance will set field expectations appropriately, monitor glass-related returns carefully, and not treat a simulation pass as a guarantee of zero breakage in any finite population of products.

A glass simulation that reports a pass is saying: under these conditions, the predicted stress is below the threshold where ten percent would fail. It is not saying no glass will break. That distinction matters at product scale.

 

Plastic Housings: The Component Has a History

Plastic housings present a different kind of challenge. Glass fracture is statistical in a way that is intrinsic to the material. Plastic housing performance is uncertain in a way that is inseparable from the full history of the component from the moment the resin left the supplier’s bag.

The simulation begins from a stress-free part with nominal material properties. The physical world delivers something considerably more complex.

Consider what happens before the part ever sees service loading. The resin is dried — or it is not dried quite properly, or the dryer temperature drifted, and the material’s molecular weight distribution shifts subtly. It is injected into a mold under enormous pressure. The flow front splits around bosses and holes and rejoins on the other side, creating weld lines — seams where the molecular structure and fiber orientation are significantly different from the surrounding material, where tensile strength can be fifty to eighty percent of the nominal value. The part cools against the mold walls, outer surfaces first, interior still molten, and the differential contraction creates residual stresses that were present before any service load was ever applied.
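A simple way to see why that weld-line number matters is to apply a knockdown factor to the allowable stress wherever the flow fronts rejoined. The sketch below uses the conservative end of the fifty-to-eighty-percent range quoted above; the stress values themselves are hypothetical.

```python
# Weld-line knockdown on the allowable stress (illustrative numbers only).
# The 0.5 factor is the conservative end of the 50-80% range cited above.

nominal_tensile_strength = 80.0   # MPa, hypothetical datasheet value
weld_line_knockdown = 0.5         # conservative end of the quoted range

allowable_away_from_weld = nominal_tensile_strength
allowable_at_weld = nominal_tensile_strength * weld_line_knockdown

predicted_stress = 45.0           # MPa, hypothetical result from the model

print(f"Margin away from weld line: {allowable_away_from_weld / predicted_stress:.2f}")
print(f"Margin at a weld line:      {allowable_at_weld / predicted_stress:.2f}")
# 1.78 away from the seam, 0.89 on it: the same predicted stress passes in
# one location and fails in the other.
```

The same stress field passes away from the seam and fails on it, which is why the analyst wants to know where those seams actually land.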

Then it is assembled. A screw drives into a boss, applying concentrated stress to a feature that may already be carrying residual stress from molding. A snap fit is deflected, applying strain that may approach the elastic limit of the material. An ultrasonic weld creates localized heat and pressure at a joint interface. The assembly arrives at service in a pre-loaded state that no simulation built from nominal, stress-free inputs captures.
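To make the snap-fit point concrete, here is the standard small-deflection beam estimate of the strain a straight cantilever snap carries at its root after assembly. The dimensions and the allowable are hypothetical, and the simple formula ignores the stress concentration at the root, so the real number is worse.

```python
# Root strain of a straight cantilever snap fit, from small-deflection beam
# theory: epsilon = 3*y*t / (2*L**2). Dimensions are hypothetical, and the
# formula ignores the root stress concentration present in a real part.

L = 10.0   # beam length, mm
t = 1.5    # beam thickness, mm
y = 1.0    # deflection needed to clear the mating hook, mm

assembly_strain = 3.0 * y * t / (2.0 * L**2)
allowable_strain = 0.025   # hypothetical one-time strain limit for the polymer

print(f"Strain at root during assembly: {assembly_strain:.2%}")   # 2.25%
print(f"Hypothetical allowable strain:  {allowable_strain:.2%}")  # 2.50%
# The feature arrives at service already carrying most of its strain budget.
```

A feature that consumed ninety percent of its strain budget before the product ever left the factory has very little left for the service loads the structural model was built to evaluate.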

A simulation built without Moldflow integration does not know where the weld lines are. A simulation that starts from nominal material properties does not capture the effect of marginal drying or high regrind content on impact toughness. A simulation that models ideal assembly does not reflect the process variation at the high end of the force tolerance.

None of this means the simulation is wrong in any simple sense. It means the simulation models the design intent, and the manufacturing process delivers the physical reality, and the gap between the two is managed through conservative design, rigorous process controls, and a physical test program that can catch what the model missed.

The competent analyst understands this gap intimately. They will design with additional conservatism at weld line locations when Moldflow data is not available. They will flag assembly features where the combination of process variation and service loading creates risk. They will recommend physical testing at temperature extremes, where the toughness reduction in many polymers is significant and the nominal room-temperature material properties are the most misleading.

The simulation models the design intent. The manufacturing process delivers the physical reality. The gap between them is not a simulation failure — it is the reason physical testing and process control exist.

 

What the Competent Analyst Actually Delivers

I want to be clear about something. Everything I have described so far is not an argument against simulation. It is an argument for using simulation with open eyes.

When I describe the limitations of glass fracture prediction or plastic housing analysis, I am describing the boundaries of any analytical tool applied to a complex physical problem. The experienced analyst who knows these boundaries delivers far better work because of that knowledge — not in spite of it. They build conservatively. They integrate complementary analyses like Moldflow into structural work. They validate against physical test data wherever it exists. They communicate uncertainty explicitly rather than presenting false precision. And they are honest with the program team about what additional testing or process characterization would increase confidence in the results.

When that process is followed faithfully, structural simulation predictions are very good. Cases where simulation and physical reality diverge significantly almost always trace back to an assumption that was flagged as a known risk, a manufacturing variable outside the normal process envelope, or a use condition outside the qualified test envelope. These are not simulation failures. They are the expected outcomes of a probabilistic engineering world that no deterministic model can fully characterize.

The complete picture of a product’s structural performance comes from three sources working together:

• The simulation that maps the physics of the design and identifies where the risks are before hardware exists.

• The physical test program that validates the simulation’s predictions and discovers what the model missed.

• The field monitoring program that closes the loop between the qualification environment and the real service environment.

Each of the three is incomplete without the others. A simulation without physical validation is an unconfirmed prediction. A physical test without simulation guidance may miss the right instrumentation, the right orientations, the right failure modes. A qualification program without field monitoring never knows whether the service environment matched the test envelope. Together, they give you the most defensible possible basis for confidence in your product.

 

The Conversation Worth Having

Pip’s problem in Great Expectations was not that his expectations were unreasonable given what he knew. It was that no one helped him understand the gap between his model of the world and the world itself — until the gap asserted itself on its own terms, at significant cost.

The equivalent failure mode in product engineering is the program team that treats a simulation pass as a guarantee, sets customer expectations accordingly, and then encounters the field reality that a physical population of products, in a real and variable service environment, does not behave exactly as a physics model predicted it would. The simulation was not wrong. The expectation was misaligned with what simulation can deliver.

The conversation I am calling for is not about lowering confidence in simulation. It is about calibrating that confidence correctly. A simulation that predicts a comfortable pass with good margin, validated against physical test data, covering the temperature extremes and the statistical variability of the material, is a very strong result. A simulation that predicts a marginal pass, built on nominal material properties at room temperature, with no physical correlation, is a weak result that looks identical in a project plan.

The engineering team knows the difference. Their customers and program managers should know it too. That shared understanding is what allows engineering to advocate for the additional test investment, the Moldflow integration, the cold-temperature evaluation, the statistically meaningful sample size — and for that advocacy to be heard rather than value-engineered away.

Forty-four years in this work have taught me that the engineers who build the best products are not the ones who are most optimistic about their predictions. They are the ones who are most honest about their uncertainties. They are the ones who say: here is what the analysis tells us, here is what it cannot tell us, and here is what we need to do to close the gap.

That honesty, consistently practiced, is what earns the trust of customers, program teams, and colleagues. It is also, in the end, what produces products that work — not just in the simulation, but in the hands of the people who depend on them.

The best simulation programs are not the ones with the most sophisticated models. They are the ones with the most honest conversations about what the models mean — and what they do not.

 

A Final Word

Dickens’ Pip eventually found his way to a more honest reckoning with the world as it was, rather than as he had imagined it to be. It cost him considerably to get there. The engineering lesson is that the honest reckoning is less costly — to the product, to the customer, and to the people doing the work — when it happens early, in the context of a rigorous analysis program, rather than late, in the context of a field failure investigation.

Great expectations of simulation are warranted. It is a remarkable capability that has transformed how we design and qualify products. Unlimited expectations are not warranted, and the experienced analyst will always tell you so.

That is not a limitation of the analysis. It is the mark of someone who has been doing this long enough to know the difference between what a model can say and what only the physical world can answer.

 

If this resonates, I would be glad to hear how you are navigating the simulation-reality gap on your own programs. Drop a comment or send me a message. These are conversations worth having out loud.

 

About the Author

Joseph P. McFadden Sr. is an Engineering Fellow at Zebra Technologies leading the Mechanical Engineering Analysis & Services (MEAS) team, and an adjunct professor at Fairfield University teaching fracture mechanics. With 44 years of materials engineering experience spanning failure analysis, CAE simulation, and expert witness work, he was one of three pioneers who brought Moldflow technology to North America. He publishes at McFaddenCAE.com under the tagline “Building Intuition Before Equations.”

AI Collaborators: This essay was developed with research and drafting support from Claude (Anthropic), ChatGPT (OpenAI), and Grok (xAI). All technical content, judgment calls, and the perspective expressed are the author’s own.

#StructuralSimulation #FractureAnalysis #EngineeringLeadership #MaterialsEngineering #ProductDevelopment #SimulationReality #CAE #MechanicalEngineering #FEA #WeibullStatistics #HolisticAnalysis
